27 November 2024

Who Let the Bots Out

Attributing State Responsibility Over Violations by Artificial Intelligence Systems

Introduction

Heralded as the third revolution in modern warfare, artificial intelligence systems are increasingly supplementing conventional means of combat: Israel’s Lavender system, for instance, is claimed to identify and target Hamas combatants with precision, while Ukraine has used Clearview AI to identify Russian targets from social media and satellite feeds. A parallel interest in AI governance has thus boomed, demanding the “responsible development, deployment, and use of AI”. For instance, the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, endorsed by 57 states (as of October 2024), imposes soft obligations upon states to keep their use of AI “consistent with their respective obligations under international law”.

The question of the responsible use of AI naturally leads to that of responsibility for AI, namely, upon whom international responsibility accrues when AI systems violate international obligations. In 2022, the Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems stressed that “every internationally wrongful act of a state, including those […] in the area of LAWS entails international responsibility of that state”. The Blueprint for Action, produced at the conclusion of the 2024 “Responsible Use of Artificial Intelligence in the Military Domain” summit, likewise affirmed that “[h]umans remain responsible and accountable for their use and effects of AI applications in the military domain, and responsibility and accountability can never be transferred to machines”.

However, scholars such as Jaemin Lee consider the matter unsettled:

Finding state responsibility in the context of AI under international law will be dependent upon the final conclusion of this question of the nature of AI.

Essentially, he argues that increasingly autonomous AI systems may eventually exceed human control (enough to demand legal personhood of their own), such that it would be problematic to impose responsibility on a creator or owner who had no knowledge or control of the system’s actions. In this note, I argue against this proposition, positing that state responsibility for violations by current and future AI systems may be established through direct or indirect attribution within the existing framework.

State Responsibility for AI Systems

In 2001, the International Law Commission (“ILC”) adopted the Articles on Responsibility of States for Internationally Wrongful Acts (“ARSIWA”), which authoritatively set out the regime of state responsibility. While the ARSIWA reflects, for the most part, the customary secondary rules regulating the consequences of internationally wrongful acts and omissions, one must remember that it is not exhaustive of the regime. Until now, however, practice and the reliance placed on the ARSIWA suggested that little, if anything, of that regime lay outside it.

Responsibility under the ARSIWA is premised on two fundamental elements: (1) attributability of conduct to the state and (2) the breach of an international obligation. In its commentary, the ILC explicitly stated that, unlike the general structure of domestic liability, “factual causation” is not a basis for state responsibility. State responsibility is thus conceived as a fault-agnostic regime. Another fundamental feature of state responsibility is that attribution is premised upon human conduct, whether performed individually or collectively. The ARSIWA’s references to “organs”, “entities”, and “groups of persons” are each meant to identify the human individuals behind the conduct. This is natural, given that the ARSIWA was conceptualized in, and relied on sources from, a time when artificial intelligence as an autonomous being was science fiction. It therefore does not follow, as Lee argues, that the latitude Article 4 of the ARSIWA grants states to designate their “organs” pursuant to their internal law could extend to autonomous AI systems. Nevertheless, we may consider responsibility for the acts of AI systems as flowing through the corresponding human acts of deployment and use.

In this context, Bérénice Boutin echoes Lee’s argument, basing attribution on the nature of the AI system deployed. In three cases – (1) where humans exercise direct control over AI systems, (2) where systems operate autonomously within a defined mandate but remain subject to human override, and (3) where systems merely perform an assistive function to human decision-making – Boutin finds the conduct attributable to the state through its military commander’s decision to deploy such a system. A fundamental error in her analysis, however, is the reliance on causality as the underlying basis, a factor irrelevant to the law of state responsibility. Thus, in a fourth scenario – that of a fully autonomous system (“FAS”), Lee’s artificial legal person – where causality becomes vague and tenuous once the system commits a violation exceeding its mandate, Boutin absolves the state of international responsibility. In response, scholars like Paola Gaeta extend Boutin’s original criteria to FAS, arguing that even their conduct may be similarly attributable to the state, given their initial deployment by a state actor.

However, a deeper issue lies beneath. Under Articles 4, 5, and 8 of the ARSIWA, any act of a state organ, of a person or entity entrusted with state authority, or of a person directed and controlled by the state, respectively, is attributable to the state. Article 7 further provides that acts of these entities remain attributable to the state even where they act beyond the granted authority or in contravention of an express command. Note that each of these entities comprises, at most, the human being (or commander) deploying, using, and controlling the AI system. Boutin’s difficulty in attributing the acts of FAS is thus understandable, since FAS are anthropomorphic but not actually human, as the ARSIWA requires. Article 7 offers no assistance here either, as it again relates only to the approving commander and not to the AI system itself.

Some solutions to this problem have been proposed. One is Lee’s argument that “entity” or “organ” under the ARSIWA may encompass AI systems, such that attribution can be established where state authority is delegated to the system (p. 180). The current human-centric conception of the ARSIWA does not support such arguments, and it must be admitted, as Magdalena Pacholska does, that such a formulation is entirely de lege ferenda. In advancing this solution, however, Pacholska and others like Samuli Haataja fail to provide a basis for the formulation, arguing instead that there is no prohibition against adopting Lee’s solution since it serves a functional purpose. For the argument to survive, it requires some basis in the law of international responsibility, likely one lying in the abyss outside the anthropocentric ARSIWA. For now, another solution is presented below.

States’ Obligation to Protect

As opposed to direct attribution, consider the positive obligation “not to allow knowingly its territory to be used for acts contrary to the rights of other States”. We may turn here to the obligation to protect human rights within state borders and to the principle of transboundary harm across them. The former obliges states to refrain from committing, and to prevent private persons from committing, violations of human rights (El-Masri v. Macedonia, para. 198); the latter requires states to ensure that activities within their jurisdiction and control do not cause harm to other states or to areas beyond national control (Nuclear Weapons Advisory Opinion, para. 29). Similarly, a state may be obligated to prevent harm caused by rogue FAS, with the violation attributed to the state’s failure to do so. This obligation would have been specifically codified in the 2024 Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, under which state parties must “take or maintain the necessary measures to ensure that activities within the life cycle of [AI] systems are consistent with its obligations to protect human rights”, had exceptions not been carved out for national defence and national security. The Framework Convention also covers FAS, since “artificial intelligence systems” are defined as “systems vary[ing] in their levels of autonomy and adaptiveness after deployment”. Nonetheless, reliance can be placed upon an extrapolation of the general human rights obligations and the principle of transboundary harm mentioned above.

Admittedly, some weaknesses of this solution are readily apparent. First, these are due diligence obligations, governed by factors such as the nature of the activity, the scientific knowledge surrounding it, and the state’s capability at the time. States may plead that best efforts were employed, hide behind the unforeseeability of the black box problem, or invoke force majeure as a circumstance precluding wrongfulness. While the argument will likely stand, states will, at a minimum, be required to demonstrate that domestic regulation was in place. A second weakness is that transboundary harm is largely defined as harm “through […] physical consequences”, restricting the applicability of the norm to cyber-related, AI-driven harm. Recent scholarship on prevention obligations in cyberspace is promising, but state responsibility in this regard remains unsettled.

Conclusion

It is true that law is slow, and that it must not act too hastily with respect to technology; yet it must not lag so far behind that key protection issues reach an escape velocity greater than that of the law’s formulation. Since FAS do not yet exist, the current regime of state responsibility appears sufficient to address the problems posed by AI systems today. I do not, however, completely disagree with Lee’s statement. There may come a time when AI systems breach such a threshold of individuality and manoeuvrability within borderless cyberspace that their attribution to a state, in terms of origin or control, becomes questionable. That time has not yet come. When it does, I do not expect the legal or scientific community to move towards a “final” conclusion on the nature of AI systems, or indeed on the law applicable to them. Law and technology will then continue their cosmic dance, to borrow from Hindu mythology, creating, destroying, and recreating each other ad infinitum.

The author would like to thank Prof. Patrícia Galvão Teles whose guidance and lectures at the Graduate Institute of International and Development Studies, Geneva, greatly benefitted this piece.


SUGGESTED CITATION  Gupta, Rohit: Who Let the Bots Out: Attributing State Responsibility Over Violations by Artificial Intelligence Systems, VerfBlog, 2024/11/27, https://verfassungsblog.de/ai-laws-drones-autonomous-weapons-state-responsibility/, DOI: 10.59704/5a3e5e1689e9ffb3.
